My python program crashed with PyTorch >1.10.1


Hello, I'm working on a deep learning project with a ViT-like backbone. When I train a model under PyTorch > 1.10.1 (I've already tried 1.12.0, 1.12.1, and 1.13.0), my program crashes with the following error:

[E ProcessGroupNCCL.cpp:737] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=63841, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1800880 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:737] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=63841, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1800908 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:737] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=63841, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1800958 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:414] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
[E ProcessGroupNCCL.cpp:414] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
  what(): [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=63841, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1800880 milliseconds before timing out.
terminate called after throwing an instance of 'std::runtime_error'
  what(): [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=63841, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1800958 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:414] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
  what(): [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=63841, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1800908 milliseconds before timing out.
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 1000892 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 1 (pid: 1000893) of binary: /opt/conda/envs/lwh3/bin/python
Traceback (most recent call last):
  File "/opt/conda/envs/lwh3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/lwh3/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/conda/envs/lwh3/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/opt/conda/envs/lwh3/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/opt/conda/envs/lwh3/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/opt/conda/envs/lwh3/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
    elastic_launch(
  File "/opt/conda/envs/lwh3/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/envs/lwh3/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
========================================================
lib/train/run_training.py FAILED
--------------------------------------------------------
Failures:
[1]:
  time       : 2022-11-12_10:44:17
  host       : ed85ab297bc3
  rank       : 2 (local_rank: 2)
  exitcode   : -6 (pid: 1000894)
  error_file:
  traceback  : Signal 6 (SIGABRT) received by PID 1000894
[2]:
  time       : 2022-11-12_10:44:17
  host       : ed85ab297bc3
  rank       : 3 (local_rank: 3)
  exitcode   : -6 (pid: 1000895)
  error_file:
  traceback  : Signal 6 (SIGABRT) received by PID 1000895
--------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time       : 2022-11-12_10:44:17
  host       : ed85ab297bc3
  rank       : 1 (local_rank: 1)
  exitcode   : -6 (pid: 1000893)
  error_file:
  traceback  : Signal 6 (SIGABRT) received by PID 1000893
========================================================

With PyTorch 1.10.1 everything works fine, but I need some functions that only exist in PyTorch >= 1.12.0. This problem really confuses me; I'm wondering if anyone could help me with it.
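In case it helps to see how the 1800000 ms in the log relates to the setup: that figure is the default 30-minute collective timeout of `init_process_group`. Below is a minimal sketch of how the timeout could be raised and NCCL logging enabled while debugging the hang; the `init_distributed` helper and the two-hour value are illustrative, not from my actual training code, and a longer timeout only delays the watchdog rather than fixing the underlying stall.

```python
from datetime import timedelta

# Illustrative value; the default watchdog limit is 30 minutes.
COLLECTIVE_TIMEOUT = timedelta(hours=2)

def init_distributed(rank: int, world_size: int) -> None:
    """Illustrative helper: init the NCCL process group with a longer
    collective timeout and verbose NCCL logging for debugging."""
    import os
    import torch.distributed as dist

    os.environ.setdefault("NCCL_DEBUG", "INFO")  # verbose NCCL logs
    dist.init_process_group(
        backend="nccl",
        rank=rank,
        world_size=world_size,
        timeout=COLLECTIVE_TIMEOUT,  # watchdog limit for collectives
    )

# Sanity check: the default timeout matches the Timeout(ms)=1800000
# reported by the watchdog in the log above.
DEFAULT_TIMEOUT_MS = int(timedelta(minutes=30).total_seconds() * 1000)
```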


